Observational studies have recently received significant attention from the machine learning community, owing to the increasing availability of non-experimental observational data and the limitations of experimental studies, such as considerable cost, impracticality, and small, unrepresentative samples. In observational studies, de-confounding is a fundamental problem for individualised treatment effect (ITE) estimation. This paper proposes disentangled representations with adversarial training to selectively balance the confounders in the binary treatment setting for ITE estimation. Adversarial training of the treatment policy selectively encourages treatment-agnostic, balanced representations of the confounders and helps to estimate the ITE in observational studies via counterfactual inference. Empirical results on synthetic and real-world datasets with varying degrees of confounding show that our proposed approach improves on state-of-the-art methods, achieving lower error in ITE estimation.
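As a concrete illustration of this kind of adversarial balancing (a minimal sketch under our own assumptions, not the authors' implementation), the encoder below is trained on the factual-outcome loss while a treatment discriminator, attached through gradient reversal, pushes the representation toward treatment-invariance; the layer sizes, input dimension, and trade-off weight `alpha` are all illustrative:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient on the backward pass,
    so minimizing the discriminator loss trains the encoder to fool it."""
    @staticmethod
    def forward(ctx, x):
        return x.clone()
    @staticmethod
    def backward(ctx, grad_output):
        return -grad_output

encoder = nn.Sequential(nn.Linear(25, 64), nn.ELU(), nn.Linear(64, 32), nn.ELU())
heads = nn.ModuleList([nn.Linear(32, 1), nn.Linear(32, 1)])  # outcome heads for t=0, t=1
disc = nn.Sequential(nn.Linear(32, 32), nn.ELU(), nn.Linear(32, 1))
opt = torch.optim.Adam(
    [*encoder.parameters(), *heads.parameters(), *disc.parameters()], lr=1e-3)

def train_step(x, t, y, alpha=1.0):
    """x: (N, 25) covariates, t: (N,) binary treatment, y: (N,) factual outcome."""
    phi = encoder(x)
    y_hat = torch.where(t.bool().unsqueeze(-1), heads[1](phi), heads[0](phi))
    factual_loss = F.mse_loss(y_hat.squeeze(-1), y)
    # Discriminator learns to recover t from phi; the reversed gradient
    # selectively drives phi toward a treatment-agnostic (balanced) state.
    t_logit = disc(GradReverse.apply(phi)).squeeze(-1)
    balance_loss = F.binary_cross_entropy_with_logits(t_logit, t.float())
    loss = factual_loss + alpha * balance_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

# After training, the ITE estimate is tau(x) = heads[1](phi(x)) - heads[0](phi(x)).
```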
Climate change has increased the intensity, frequency, and duration of extreme weather events and natural disasters across the world. While the growing volume of data on natural disasters broadens the scope for machine learning (ML) in this field, progress is relatively slow. One bottleneck is the lack of benchmark datasets that would allow ML researchers to quantify their progress against a standard metric. The objective of this short paper is to survey the state of benchmark datasets for ML tasks related to natural disasters, categorizing them according to the disaster management cycle. We compile a list of existing benchmark datasets introduced in the past five years. We propose a web platform, NADBenchmarks, where researchers can search for benchmark datasets for natural disasters, and we develop a preliminary version of such a platform using our compiled list. This paper is intended to help researchers find benchmark datasets to train their ML models on, and to provide general directions for topics where they can contribute new benchmark datasets.
Data compression is becoming critical for storing scientific data because many scientific applications need to store large amounts of data and post-process this data for scientific discovery. Unlike image and video compression algorithms, which bound errors only on the primary data, scientists require compression techniques that accurately preserve derived quantities of interest (QoIs). This paper presents a physics-informed compression technique, implemented as an end-to-end, scalable, GPU-based pipeline, that addresses this requirement. Our hybrid compression technique combines machine learning with standard compression methods. Specifically, we combine an autoencoder, an error-bounded lossy compressor that provides guarantees on the raw-data error, and a constraint-satisfaction post-processing step that preserves the QoIs within a minimal error (generally less than floating-point error). The effectiveness of the pipeline is demonstrated by compressing nuclear fusion simulation data generated by XGC, a large-scale fusion code that produces hundreds of terabytes of data in a single day. Our approach works within the ADIOS framework and achieves a compression factor of more than 150 while requiring only a few percent of the computational resources needed to generate the data, making the overall approach highly effective in practical scenarios.
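To make the pipeline's division of labor concrete, here is a minimal NumPy sketch of the three stages under our own simplifying assumptions (an identity-like stand-in for the trained autoencoder, a uniform quantizer standing in for an error-bounded lossy compressor such as SZ or ZFP, and the mean as the preserved QoI); it is not the XGC/ADIOS implementation:

```python
import numpy as np

def quantize_residual(residual, eps):
    """Uniform scalar quantization guaranteeing |error| <= eps pointwise
    (stand-in for an error-bounded lossy compressor)."""
    return np.round(residual / (2 * eps)).astype(np.int32)  # low-entropy integers

def dequantize(q, eps):
    return q.astype(np.float64) * (2 * eps)

def preserve_qoi_mean(recon, target_mean):
    """Constraint-satisfaction step: project the reconstruction so a derived
    quantity of interest (here, the mean) matches to floating-point accuracy."""
    return recon + (target_mean - recon.mean())

data = np.random.default_rng(0).normal(size=10_000)
ae_recon = data + 0.01 * np.random.default_rng(1).normal(size=data.size)  # assumed AE output
q = quantize_residual(data - ae_recon, eps=1e-3)          # what actually gets stored
recon = ae_recon + dequantize(q, eps=1e-3)                # decompression
recon = preserve_qoi_mean(recon, data.mean())             # QoI post-processing
assert np.max(np.abs(recon - data)) <= 2e-3               # pointwise bound (eps + shift)
print(abs(recon.mean() - data.mean()))                    # QoI error ~ machine precision
```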
We seek to impose linear, equality constraints in feedforward neural networks. As top layer predictors are usually nonlinear, this is a difficult task if we seek to deploy standard convex optimization methods and strong duality. To overcome this, we introduce a new saddle-point Lagrangian with auxiliary predictor variables on which constraints are imposed. Elimination of the auxiliary variables leads to a dual minimization problem on the Lagrange multipliers introduced to satisfy the linear constraints. This minimization problem is combined with the standard learning problem on the weight matrices. From this theoretical line of development, we obtain the surprising interpretation of Lagrange parameters as additional, penultimate layer hidden units with fixed weights stemming from the constraints. Consequently, standard minimization approaches can be used despite the inclusion of Lagrange parameters -- a very satisfying, albeit unexpected, discovery. Examples ranging from multi-label classification to constrained autoencoders are envisaged in the future.
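To fix ideas, here is a hedged sketch of the construction in our own notation (z for the auxiliary predictor variables, Az = b for the linear equality constraints, f_W for the network, and ℓ for the loss); the paper's exact formulation may differ:

```latex
% Saddle-point Lagrangian with auxiliary predictors z (sketch, our notation)
\mathcal{L}(W, z, \lambda) = \ell\bigl(z, f_W(x)\bigr) + \lambda^{\top}(Az - b)

% Eliminating z (per the abstract) leaves a dual minimization over \lambda,
% solved jointly with the usual minimization over the weights W:
\min_{W}\; \min_{\lambda}\; \mathcal{L}\bigl(W, z^{*}(W, \lambda), \lambda\bigr)
```

Because λ enters only through the bilinear term λᵀAz, each multiplier acts on the predictors through fixed weights given by the rows of A, which is the source of the interpretation of Lagrange parameters as penultimate-layer hidden units with fixed, constraint-derived weights.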
Humans make extensive use of vision and touch as complementary senses, with vision providing global information about the scene and touch providing local information during manipulation without suffering from occlusion. In this work, we propose a novel framework for learning multi-task visuo-tactile representations in a self-supervised manner. We design a mechanism that allows a robot to autonomously collect spatially aligned visual and tactile data, a key property for downstream tasks. We then train visual and tactile encoders with a cross-modal contrastive loss to embed these paired sensory inputs into a shared latent space. The learned representations are evaluated, without fine-tuning, on five perception and control tasks involving deformable surfaces: tactile classification, contact localization, anomaly detection (e.g., palpation of a tumor in a surgical phantom), tactile search from a visual query (e.g., under occlusion), and tactile servoing along cloth edges and cables. The learned representations achieve an 80% success rate on towel feature classification and a 73% average success rate on anomaly detection in surgical materials, succeed at vision-guided tactile search, and attain an 87.8% average servoing distance along cables and garment seams. These results demonstrate the flexibility of the learned representations and represent a step toward task-agnostic visuo-tactile representations for robot control.
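The cross-modal objective is of the standard InfoNCE/CLIP form; a minimal PyTorch sketch (with our assumed batch size, embedding width, and temperature, not the authors' code) is:

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(z_vision, z_touch, temperature=0.1):
    """z_vision, z_touch: (N, d) embeddings of N spatially aligned pairs.
    Pair i is the positive; all other pairings in the batch are negatives."""
    z_v = F.normalize(z_vision, dim=1)
    z_t = F.normalize(z_touch, dim=1)
    logits = z_v @ z_t.t() / temperature        # (N, N) cosine similarities
    targets = torch.arange(z_v.size(0))         # matched pairs on the diagonal
    # symmetric loss: vision -> touch and touch -> vision retrieval
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# usage: z_v = vision_encoder(images); z_t = tactile_encoder(touch_readings)
loss = cross_modal_contrastive_loss(torch.randn(32, 128), torch.randn(32, 128))
```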
Any two objects in contact with each other exert forces on one another, whether simply due to gravity or to mechanical contact, such as a robot arm grasping an object or even the contact between two bones at our knee joint. The ability to naturally measure and monitor these contact forces enables a plethora of applications, from warehouse management (detecting mispacked boxes based on weight), to robotics (making a robot arm's grip as sensitive as human skin), to healthcare (knee implants). Designing a ubiquitous force sensor that works naturally across all these applications is challenging. First, the sensor should be small enough to fit in narrow spaces. Next, we do not want to lay bulky cables to read out the sensor's force values. Finally, we need a battery-free design for in-vivo applications. We develop WiForceSticker, a wireless, battery-free, sticker-like force sensor that can be ubiquitously deployed on any surface, such as warehouse packages, robot arms, and knee joints. WiForceSticker first designs a 4 mm × 2 mm × 0.4 mm capacitive sensor equipped with a 10 mm × 10 mm antenna on a flexible PCB substrate. Second, it introduces a new mechanism that lets the force information be read wirelessly by interfacing the sensor with COTS RFID systems. The sensor can detect forces in the range of 0-6 N with a sensing accuracy of <0.5 N across multiple testing environments and over 10,000 presses at varying force levels. We also demonstrate two application case studies with the designed sensor: weighing warehouse packages and sensing the forces applied by bone joints.
Cybersickness, experienced when using virtual reality (VR) systems, is characterized by nausea, dizziness, headache, eye strain, and other discomfort. Previously reported machine learning (ML) and deep learning (DL) algorithms for detecting (classification) and predicting (regression) VR cybersickness use black-box models and therefore lack explainability. Moreover, VR sensors generate massive amounts of data, resulting in complex models. Having inherent explainability in cybersickness detection models can therefore significantly improve their trustworthiness and provide insight into why and how an ML/DL model arrived at a specific decision. To address this issue, we propose three explainable machine learning (xML) models to detect and predict cybersickness: 1) explainable boosting machine (EBM), 2) decision tree (DT), and 3) logistic regression (LR). We evaluate the xML-based models on publicly available physiological and gameplay datasets. The results show that the EBM can detect cybersickness with accuracies of 99.75% and 94.10% on the physiological and gameplay datasets, respectively. When predicting cybersickness, the EBM yields a root-mean-square error (RMSE) of 0.071 on the physiological dataset and 0.27 on the gameplay dataset. Furthermore, EBM-based global explanations reveal exposure length, rotation, and acceleration as key features causing cybersickness in the gameplay dataset, whereas galvanic skin response and heart rate are most relevant in the physiological dataset. Our results also show that EBM-based local explanations can identify the cybersickness-causing factors for individual samples. We believe the proposed xML-based cybersickness detection approach can help future researchers understand, analyze, and design simpler cybersickness detection and reduction models.
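As a hedged sketch of the glass-box side of such a pipeline (using the InterpretML library, with stand-in data and illustrative feature names rather than the paper's datasets):

```python
import numpy as np
from interpret.glassbox import ExplainableBoostingClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                    # stand-in feature matrix
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)    # stand-in cybersickness label

ebm = ExplainableBoostingClassifier(
    feature_names=["galvanic_skin_response", "heart_rate",
                   "rotation", "exposure_length"])
ebm.fit(X, y)
print(ebm.score(X, y))                           # training accuracy

# Global explanation: additive per-feature shape functions and importances,
# the kind of output used to rank cybersickness-driving features.
global_explanation = ebm.explain_global()

# Local explanation: per-sample feature contributions for one prediction.
local_explanation = ebm.explain_local(X[:1], y[:1])
# Both objects can be rendered interactively with interpret.show(...).
```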
Today's world is severely affected by the novel coronavirus (COVID-19). Identifying affected people using medical kits is very slow, and no one knows what will happen next; the world faces erratic conditions and does not know what the near future holds. This paper attempts a prognosis of coronavirus recovered cases using LSTM (Long Short-Term Memory) networks. The work uses data from 258 regions, with their latitudes and longitudes, and the death counts over 403 days, from 22-01-2020 to 27-02-2021. Specifically, the advanced deep-learning algorithm known as LSTM has proven highly effective at extracting the essential features for time series data (TSD) analysis, and many approaches have already been applied to spread prediction. The main task of this paper is thus to analyze the global spread of coronavirus recovered cases using an LSTM-based deep learning architecture.
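A minimal Keras sketch of this kind of one-step-ahead LSTM forecaster, with an assumed 14-day input window, illustrative layer sizes, and synthetic data in place of the real series:

```python
import numpy as np
from tensorflow import keras

def make_windows(series, window=14):
    """Turn a 1-D daily series into (samples, window, 1) inputs -> next-day targets."""
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])
    return X[..., None], series[window:]

# synthetic stand-in for a 403-day cumulative recovered-cases series
series = np.cumsum(np.random.default_rng(0).poisson(50, size=403)).astype("float32")
series /= series.max()                       # simple scaling for stable training
X, y = make_windows(series)

model = keras.Sequential([
    keras.layers.Input(shape=(14, 1)),
    keras.layers.LSTM(64),
    keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=10, batch_size=32, verbose=0)
next_day = model.predict(X[-1:])             # one-step-ahead forecast
```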
Rice (Oryza sativa) is the staple food in more than a hundred countries worldwide, and its cultivation is vital for global economic growth. However, a major problem facing the agricultural industry is rice disease, a leading cause of reduced crop quality and yield. Because farmers often know little about rice diseases, they cannot properly diagnose rice leaf diseases or care for the leaves accordingly, and production declines as a result. In our literature survey, YOLOv5 showed better results than other deep learning methods. With the continuous development of object detection technology, the YOLO family of algorithms, offering very high accuracy and better speed, has been used in various scene-recognition tasks to build rice leaf disease monitoring systems. We annotated a collected dataset of 1,500 images and propose a YOLOv5-based deep learning method for rice disease classification and detection. We then trained and evaluated the YOLOv5 model. The simulation results show improvements in the object detection performance of the enhanced YOLOv5 network proposed in this paper. The achieved recognition precision, recall, mAP, and F1 score of 90%, 67%, 76%, and 81%, respectively, serve as the performance metrics.
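For reference, the standard YOLOv5 workflow this builds on looks roughly as follows; the dataset file rice_leaf.yaml, the checkpoint name, and the image path are illustrative assumptions, not the paper's artifacts:

```python
import torch

# Training is typically run from the cloned ultralytics/yolov5 repo, e.g.:
#   python train.py --img 640 --batch 16 --epochs 100 \
#       --data rice_leaf.yaml --weights yolov5s.pt
# where rice_leaf.yaml lists train/val image paths and the disease class names.

# Inference with a trained checkpoint via torch.hub:
model = torch.hub.load("ultralytics/yolov5", "custom", path="best.pt")
results = model("rice_leaf.jpg")   # detections with boxes, confidence, class
results.print()                    # per-class counts and inference speed
boxes = results.xyxy[0]            # (n, 6): x1, y1, x2, y2, confidence, class
```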
In this work, the capabilities of an encoder-decoder learning framework are exploited to predict soft-failure evolution over a long future horizon. This enables timely repair actions to be triggered at low quality-of-transmission (QoT) margins, well before an expensive hard failure occurs, ultimately reducing the frequency of repair actions and the associated operational expenditures. Specifically, it is shown that the proposed scheme can trigger a repair action several days before the anticipated hard failure, in contrast to soft-failure detection schemes that use a rule-based fixed QoT margin, which may lead to either premature repair actions (i.e., several months before the hard failure occurs) or repair actions taken too late (i.e., after the hard failure occurs). Both frameworks are evaluated and compared on a lightpath established in an elastic optical network, where soft-failure evolution is modeled by analyzing bit-error-rate information monitored at the coherent receiver.
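A minimal PyTorch sketch of such an encoder-decoder forecaster (our assumed layer sizes, history length, and horizon; the paper's architecture may differ):

```python
import torch
import torch.nn as nn

class BERForecaster(nn.Module):
    """Maps a monitored BER history to its predicted multi-day evolution,
    from which a repair trigger can be derived."""
    def __init__(self, hidden=64, horizon=30):
        super().__init__()
        self.encoder = nn.LSTM(1, hidden, batch_first=True)
        self.decoder = nn.LSTM(1, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)
        self.horizon = horizon

    def forward(self, history):                 # history: (B, T, 1) daily BER
        _, state = self.encoder(history)        # summarize the past
        step = history[:, -1:, :]               # seed decoder with last value
        outs = []
        for _ in range(self.horizon):           # autoregressive roll-out
            out, state = self.decoder(step, state)
            step = self.head(out)               # next-day BER estimate
            outs.append(step)
        return torch.cat(outs, dim=1)           # (B, horizon, 1)

model = BERForecaster()
pred = model(torch.randn(8, 60, 1))             # 60-day history -> 30-day forecast
# A repair action could be triggered when pred first crosses a hard-failure
# BER threshold within the forecast horizon.
```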